Market Roundup August 23, 2002
IBM/HP Exchange Storage APIs, Sun Announces SAN Management SW
Platform Computing, Entropia Announce Grid Computing Wins
SNIA Announces Storage Management Initiative
IBM/HP Exchange Storage APIs, Sun Announces SAN Management SW

IBM and Hewlett-Packard have announced an agreement that the companies said serves as a stepping stone to the delivery of a standards-based storage management platform based on the Common Information Model (CIM) and Bluefin specifications adopted by the Storage Networking
Industry Association. Under the terms of the agreement, IBM and HP are
cross-licensing storage APIs and command line interfaces that will enable
IBM’s storage management software to manage the HP StorageWorks Enterprise
Virtual Array and Enterprise Modular Array systems, and HP’s OpenView storage
management software to manage IBM’s TotalStorage Enterprise Storage Server
(code-named Shark). No availability information was included in the announcement.

Separately, Sun Microsystems introduced Sun
StorEdge Enterprise Storage Manager (ESM) software for SANs, which the
company said provides a centralized, Web-accessible platform for viewing and
managing storage environments. Sun described itself as the first vendor to deliver a Web-Based Enterprise Management (WBEM)- and CIM-compliant SAN
management platform, and stated that such products will enable customers to
leverage new storage technologies from Sun and other vendors. The new Sun ESM
software is built on the Sun ONE platform, is available on Solaris, and is
Web-accessible from Linux, HP-UX and other hosts. Sun also noted that ESM is
supported by the company’s storage partners including HDS, Qlogic, and
Brocade. Sun ESM is currently available starting at $15,000 and follows a tiered, capacity-based pricing model.

First, we should say that we applaud any and all
storage vendor efforts that lead toward a future where standards-based
storage management is easy, the software is rich, and everybody’s storage
environments are good looking. That said, we believe Sun’s new ESM product
and the IBM/HP agreement view the often thorny world of storage management
through very different sorts of glasses. On Sun’s rosy side of the street,
though ESM may be the industry’s first CIM- and WBEM-compliant platform (a statement some other vendors may quibble with), it should be remembered just what that means in practical terms. CIM and WBEM are emerging in part due to
storage customers’ demands for tools to better manage heterogeneous storage
environments, and seek to assist that process with a single set of standards to which storage management software can be written. Good enough. But to work
properly, CIM-compliant products from multiple vendors need to be available.
In other words, ESM provides Sun and its partner buddies short-term bragging
rights for being first out of the box, but the new software is unlikely to
offer much succor to heterogeneously-beleaguered clients anytime soon.

In all, we believe the IBM/HP deal represents a clearer view of where enterprise storage currently resides. Beginning last winter with EMC’s push to share storage management APIs as an adjunct to its WideSky initiative, the industry has seen a pair of high-profile deals completed (first between EMC and Compaq, and more recently with HP’s extension of that deal to encompass its own products). The IBM/HP deal qualifies as this week’s big news, but we would be surprised to see the momentum stop here. Perhaps even Sun and HDS (which have both been conspicuous by their absence) might get into the API swim.

Do API deals represent a viable long-term strategy for storage vendors? Not really, but we believe they can offer short-term assistance to many clients. The fact is that while CIM, WBEM, and Bluefin may represent the best of all possible future storage worlds, API sharing agreements have the potential to help real-world customers with real-world problems today. For that reason, while we applaud Sun for making ESM CIM- and WBEM-compliant, we also cheer IBM and HP’s decision to work together in offering their customers more immediate heterogeneous relief.
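The difference between per-vendor API sharing and a common information model can be sketched in a few lines of code. In the toy Python example below, every class and method name is invented for illustration (neither vendor's real API looks like this): two arrays expose incompatible proprietary interfaces, and a thin adapter maps both onto a single CIM-style volume model that management software can be written against once, which is essentially the promise of CIM and Bluefin.

```python
# Hypothetical sketch: two vendors with incompatible management APIs,
# unified behind a single CIM-style model. All names are invented.

class VendorAArray:
    """Stands in for one vendor's proprietary management API."""
    def get_capacity_gb(self):
        return 512

class VendorBArray:
    """A second vendor's API: same information, different interface."""
    def totalBytes(self):
        return 1024 * 1024**3

class StorageVolume:
    """A CIM-style common model: one schema, regardless of vendor."""
    def __init__(self, name, capacity_bytes):
        self.name = name
        self.capacity_bytes = capacity_bytes

def adapt(device, name):
    """Map each proprietary API onto the common model."""
    if isinstance(device, VendorAArray):
        return StorageVolume(name, device.get_capacity_gb() * 1024**3)
    if isinstance(device, VendorBArray):
        return StorageVolume(name, device.totalBytes())
    raise TypeError("no adapter for this device")

# A management tool now only needs the common model, not N vendor APIs:
volumes = [adapt(VendorAArray(), "array-a"), adapt(VendorBArray(), "array-b")]
total_gb = sum(v.capacity_bytes for v in volumes) // 1024**3
```

The catch, as noted above, is that the adapter layer (or the vendors' own CIM providers) must exist for every device in the shop before the common model pays off, which is why a standards-compliant console helps little until multiple vendors ship compliant products.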
According to a new study by InfoTrends Research
Group, the number of wireless imaging users worldwide is forecast to grow
from 6.6 million in 2002 to over 160 million by 2007. At present 98% of all
wireless imaging users reside in Japan, and InfoTrends reported that 5+
million people in Japan carry cell phones with embedded digital cameras. Multimedia
messaging services (MMS) are being deployed in Europe, and in North America,
insurance agents are sending images wirelessly from the field for inclusion
in claims documents. In a separate announcement, a new IDC report projects that worldwide shipments of handsets and PDAs integrating digital imaging will reach 151 million in 2006; shipments of imaging-enabled handhelds alone are projected to grow from 617,440 units in 2002 to 11.1 million in 2006.

The market has borne witness to several announcements and marketing campaigns by wireless service providers as of late. Cingular has announced rate plans with minutes that roll over, Verizon
is creating more no-charge roaming and long distance plans, and Sprint PCS is
flooding the airwaves with messages about the glory of sending and receiving
pictures on the cell phone. If seeing is believing, one might assume that
everyone is using the cell phone to cut in line at night clubs, transmit
pickup lines at parties based upon the contents of the cheese tray, and
document the exceptionally photogenic lifestyle of cell phone users such as
themselves in real time. But all this comes on the heels of previous wireless
nirvanas: 3G, WAP, and the Internet, whose uptake could be described as
underwhelming. So why the new push for wireless imaging?

With the current economic malaise lingering, it appears that service providers are attempting to garner new markets through service differentiation and flashy new capabilities. But given the abysmal failure of previous wireless innovations, it is hard to imagine, at least in North America, that simply providing new technology or capabilities will drive substantial long-term use. Although one could argue that a claims adjuster could take a digital image and forward it in real time to the office, as opposed to using a digital or traditional 35mm camera, is there a recognizable business case for doing so? Perhaps one could send a photo to Grandma on the cell phone, but is the small size of the image compatible with Grandma’s fading eyesight? It seems to us that the current “if you build it, they will come” mentality is ill-advised given the current market doldrums.

However, these services may provide value to the service provider, if not the customer. The advent of rollover minutes, which may be a welcome notion to users, is a latent threat to providers. If this becomes an industry norm, and users accrue sufficient numbers of unused minutes, they may choose to pare back their rate plans. Given the lowering or elimination of long distance and roaming fees in many rate plans, such additional revenue losses could prove painful or even fatal to service providers. But if users can be convinced to try new services such as photo imaging and MMS that use up their buckets of accruing minutes, this could help stem whatever potential dangers service providers might suffer.

Yet to our way of thinking this vision is little more than a particularly noxious pipe dream. Face facts: the notion of 150+ million cell phone-equipped photographers and image-hungry viewers appearing during the next five years seems as likely as widespread deployment of personal jet-packs for transportation and atomic-powered shaving kits.
Platform Computing, Entropia Announce Grid Computing Wins

Platform Computing announced this week that Bristol-Myers Squibb has selected Platform ActiveCluster to build an enterprise desktop grid for life sciences research. ActiveCluster will initially be implemented on several thousand desktops in Bristol-Myers research sites in North America and integrated with Linux servers to create a
virtual computing infrastructure the company believes will help reduce time
to drug discovery. According to Platform, the company’s solutions are being
used by 60% of the top twenty global pharmaceutical companies, and the
Bristol-Myers project is similar to other ActiveCluster installations at
life sciences organizations including Princess Margaret Hospital, Entelos,
Inc. and the France Telethon Decryption project. In an unrelated announcement,
Entropia and the San Diego Supercomputer Center (SDSC) announced the
successful deployment of Entropia’s DCGrid solution on desktop PCs at the
Center. SDSC’s first use of DCGrid applied GAMESS (General Atomic and Molecular Electronic Structure System) software to compute molecular structure and properties
with the goal of populating databases
for purposes including analysis of bioterrorism agents. According to SDSC,
DCGrid provided researchers the means to securely automate a large quantity
of strategic, high-throughput calculations, allowing them to apply the GAMESS
code on a wider range of molecular structures than would have been possible
otherwise. No source code changes to GAMESS were needed for deployment on
DCGrid.

As we noted in our recent Sageza Competitive Review, Grid Computing: Contender or Pretender, grid computing has stirred up a great deal of interest, to the point where some claim it to be a likely successor to the Internet. Along with serving as an intellectual ping pong ball among IT industry pundits and other assorted blowhards, grid solutions have also been the prime subjects and enabling technologies behind high-profile, high-end computing projects sponsored by, among many others, NASA, NSF, CERN, and the DoE. Weighty stuff, indeed, so how do PC-based grid solutions from Platform and Entropia fit in among such heavy hitters?

It should be remembered that at its essence, grid is really about harnessing as much computing power as possible in any given environment. One of the earliest iterations of grid was (and still can be) seen in the Search for Extraterrestrial Intelligence (SETI@home) project, which has been enabled by tens of thousands of volunteers worldwide who donate cycles on their home and workplace computers to crunch data collected from deep space by radio telescopes. Similar in form to the SETI project, Platform ActiveCluster and Entropia’s DCGrid offer highly automated methodologies for securely coordinating, distributing, collecting, and collating batches of data for processing on PCs across enterprise networks. As such, companies like Bristol-Myers and organizations like the SDSC can better leverage all of their CPUs, not just those in their data centers.

PC-based grids may not grab as many headlines as high-profile installations, but to our way of thinking they prove that well-designed grid solutions are helping to solve real-world problems by enabling enterprises to wring every bit of processing power out of their hardware investments. That qualifies as both smart computing and smart business.
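The coordinate-distribute-collect-collate cycle that these products automate can be illustrated with a toy scatter-gather loop. In the Python sketch below (a simplified stand-in, not Platform's or Entropia's actual design), a thread pool plays the role of the pool of idle desktop PCs, and the "molecular" workload is just a sum of squares; in a real desktop grid each batch would travel over the network to agent software installed on a volunteer machine.

```python
# Toy sketch of the scatter-gather pattern behind desktop grids.
# A thread pool stands in for idle desktop PCs; all names are illustrative.
from concurrent.futures import ThreadPoolExecutor

def score_batch(batch):
    """Stand-in for a compute-heavy job such as a GAMESS run;
    here it just sums the squares of the batch's numbers."""
    return sum(x * x for x in batch)

def run_grid(work, batch_size=4, workers=4):
    # Coordinator: split the work into independent batches (scatter)...
    batches = [work[i:i + batch_size] for i in range(0, len(work), batch_size)]
    # ...farm the batches out to the workers and collect the results...
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(score_batch, batches))
    # ...then collate the partial results into one final answer (gather).
    return sum(partials)

# Same answer as a single machine, computed across many workers.
result = run_grid(list(range(100)))
```

The pattern works only because the batches are independent of one another, which is why embarrassingly parallel workloads like SETI@home's signal analysis or SDSC's per-molecule GAMESS runs are the natural fit for desktop grids.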
IBM has announced it will build the Capital Wireless
Integrated Network (CapWIN), a $20 million data network linking more than forty local, state, and federal agencies in the Washington, D.C. metro area.
CapWIN is designed to let firefighters, police, transportation officials, and
other emergency responders swap data over PCs, PDAs, and data-enabled cell
phones during large-scale emergencies such as natural disasters, accidents,
or terrorist incidents. The new system is designed to bridge the
communications gap created by the myriad of different communications systems
used by these different agencies as well as providing access to data from
local, state, and federal databases not accessible to many of the participating
agencies at this time. When CapWIN is complete, users will be able to set up
secure chat rooms and use instant messaging to communicate. The network is
designed to handle 10,000 people making 7,500 transactions per minute and is
expected to be largely completed within the next year.

File this one under “don’t expect technology alone to solve every problem.” It seems clear that the Washington, D.C. area
has more than its fair share of overlapping agencies that respond to crisis
situations and that this network may alleviate some of the poor
communications between these various entities. The ability to swap data and
effectively communicate in a heterogeneous, chaotic environment aids and abets response efforts. While not building such a capability in our nation’s capital would be rightfully seen as irresponsible, so would the idea
that this network — as presently articulated — will alone solve crisis
situation communications breakdowns.

Much of the recent technology bubble was fueled by enterprises purchasing large amounts of hardware, software, and network capacity without clear, concise, measurable goals and objectives for that investment. Not only was this money spent wantonly, it was spent foolishly. If the various agencies participating in the CapWIN project are going to do more than just waste money on a technology investment, they will have to set clear objectives for the CapWIN system, including usage priorities, access coordination, and overall management of the system itself. While the system is designed to be useful in a variety of crisis situations, its conception and realization are the product of the terrorist attacks of last fall. Given that nearly every federal agency in Washington now has some role to play in the War on Terror, one can foresee that all would like access to the CapWIN system in times of emergency. Without a clear plan concerning who will have access, and with what priorities, the CapWIN system could end up looking like a substantial portion of the technology investment made by the private sector in the past five years: dusty and unused.
SNIA Announces Storage Management Initiative

Recently the Storage Networking Industry Association
(SNIA) announced the Storage Management Initiative (SMI), based upon the
Bluefin draft specification. The initiative is designed to develop,
standardize, and drive adoption of open storage management interfaces,
hopefully leading to greater management interoperability amongst storage
products. The standards and interfaces will be built upon the Distributed
Management Task Force (DMTF)’s Web-based Enterprise Management (WBEM)
architecture and the Common Information Model (CIM). The SMI will incorporate
ongoing work from the various SNIA Technical Work Groups, as well as that of
member companies. The group intends to create an interface for discovering,
managing, and monitoring devices that run on a storage network. In addition
to development work, SNIA intends to actively promote the new standard
through education and marketing, as well as to provide testing and
interoperability suites for vendors to validate their products and demonstrations
where vendors can display their products interoperating. The SNIA was formed in America in 1997, and has
since spread to Europe and Asia. With over seventy members in the U.S. and
over forty-five members in Europe, the organization reads like a Who’s Who of
the storage industry. On one hand this is both impressive and reassuring. Impressive
to see so many competing organizations working together to bring the industry
closer, and reassuring to know that warring specifications are less likely to
form as most everyone is participating in the venture on some level. On the
other hand, conventional wisdom reminds us that a camel is a horse designed by a committee. In other words, when too many participants get involved in a project, the resulting outcome can be less than the sum of its parts. The argument for forming these standards
as a group is that the alternative is more painful than the proposed solution.
Individual vendors can no longer afford the investment required to test and bring products to market in a reasonable timeframe. Startups with good
products face steep barriers to entry into highly fragmented markets. The other option, consolidation behind a single vendor such as EMC or CA, each of which offers its own management platform, has also been less successful.
SNIA in effect is serving as the open source community driver for the storage
industry, and so far the industry seems willing to work this way. In
addition, several companies will probably have SMI-compliant products out before the final specification is released, so it does not appear that the committee approach is hampering vendors’ creative efforts. In addition
to the specification itself, SNIA has committed itself to a high degree of
member and user community education, as well as to preparing test suites and
opportunities for vendors to bring proof of concept to end users. This
collaboration — extending from development to marketing and education, to
testing and implementation — bodes well for the success of the venture.

The storage industry stands at an inflection point. Various technologies currently co-exist, some complementary and some competing. The market has yet to choose the winners, and vendors are having little luck forcing the technologies of their choice. The SMI initiative does not address these issues; it is merely the first in a long line of standards that must be embraced and adopted by the community. SNIA has its work cut out for it, but to the degree it succeeds, it will bring relief for end users and accelerated adoption of new technologies for vendors. Sageza sees this as a win for everyone.